This article introduces HPC systems for different LLM (Large Language Model) applications, covering models for both training and inference.
For LLM training, here it is: the APPLIED HPC Deep Type-AS4UX2S8GP-AU019.
■ CPU: Xeon Gold 6530 (32 cores / 64 threads / 2.1 GHz base / 4.0 GHz turbo)
■ Memory: 2,048 GB (64 GB × 32)
■ Storage: 1.92 TB SSD + 7.68 TB SSD
■ GPU: NVIDIA H100 94GB
■ OS: Ubuntu 22.04 LTS
■ Frameworks: TensorFlow / PyTorch / Chainer (via Docker Desktop)
■ Optical drive: none
■ Power supply: 4 units, 3,000 W / 200 V, redundant configuration (2+2), 80 PLUS Titanium certified
■ 3-year send-back hardware warranty
Price: 16,800,000 yen (tax included)
- Company: アプライド (Applied)
- Price: Other